On the Doubly Sparse Compressed Sensing Problem
A new variant of the Compressed Sensing problem is investigated in which the
number of measurements corrupted by errors is upper bounded by some value l,
while no other restrictions are placed on the errors. We prove that in this
case it is enough to make 2(t+l) measurements, where t is the sparsity of the
original data, and we propose a rather simple recovery algorithm for this
setting. An analog of the Singleton bound from coding theory is derived, which
proves the optimality of the corresponding measurement matrices.
Comment: 6 pages, IMACC 2015 (accepted)
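In this setting the measurement model is y = Ax + e, where x is t-sparse, A has 2(t+l) rows, and the error vector e has at most l nonzero entries of arbitrary magnitude. The sketch below is a minimal numerical illustration of that model, not the paper's proposed algorithm: the random Gaussian measurement matrix and the brute-force decoder (which guesses the uncorrupted rows and the signal support, so it only scales to toy sizes) are illustrative assumptions.

```python
# Toy illustration of doubly sparse CS: t-sparse signal, at most l grossly
# corrupted measurements, m = 2(t+l) measurements in total.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, t, l = 8, 1, 1                  # signal length, sparsity, max corrupted measurements
m = 2 * (t + l)                    # number of measurements claimed sufficient

A = rng.standard_normal((m, n))    # generic (random Gaussian) measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=t, replace=False)] = rng.standard_normal(t)
y = A @ x
corrupt = rng.choice(m, size=l, replace=False)
y[corrupt] += 10.0 * rng.standard_normal(l)   # l gross, unbounded errors

def decode(A, y, t, l, tol=1e-8):
    """Brute-force decoder: find a t-sparse x consistent with y on >= m-l rows."""
    m, n = A.shape
    for rows in itertools.combinations(range(m), m - l):   # guess uncorrupted rows
        for supp in itertools.combinations(range(n), t):   # guess signal support
            rows_l, supp_l = list(rows), list(supp)
            A_sub = A[np.ix_(rows_l, supp_l)]
            coef, *_ = np.linalg.lstsq(A_sub, y[rows_l], rcond=None)
            if np.allclose(A_sub @ coef, y[rows_l], atol=tol):
                x_hat = np.zeros(n)
                x_hat[supp_l] = coef
                return x_hat
    return None

x_hat = decode(A, y, t, l)
print("exact recovery:", x_hat is not None and np.allclose(x_hat, x, atol=1e-6))
```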
Adaptive Measurement Network for CS Image Reconstruction
Conventional compressive sensing (CS) reconstruction is very slow because it
requires solving an optimization problem. Convolutional neural networks can
realize fast processing while achieving comparable results. However,
high-quality CS image recovery depends not only on a good reconstruction
algorithm but also on good measurements. In this paper, we propose an adaptive
measurement network in which the measurement is obtained by learning. The new
network consists of a fully-connected layer and ReconNet. The fully-connected
layer, which has a low-dimensional output, acts as the measurement. We train
the fully-connected layer and ReconNet simultaneously and obtain an adaptive
measurement. Because the adaptive measurement fits the dataset better than a
random Gaussian measurement matrix, it can extract the information of the
scene more efficiently and produce better reconstruction results at the same
measurement rate. Experiments show that the new network outperforms the
original one.
Comment: 11 pages, 8 figures
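A minimal sketch of the idea in PyTorch, assuming a block-based pipeline: the learned fully-connected layer plays the role of the measurement matrix, and a plain two-layer decoder stands in for ReconNet; the block size, measurement rate, and training loop below are illustrative assumptions, not the paper's configuration.

```python
# Sketch: learn the measurement matrix jointly with a reconstruction network.
# The fully-connected "measurement" layer replaces a fixed random Gaussian matrix;
# the decoder here is a simple stand-in for ReconNet.
import torch
import torch.nn as nn

class AdaptiveMeasurementNet(nn.Module):
    def __init__(self, block_size=33, measurement_rate=0.25):
        super().__init__()
        n = block_size * block_size                 # vectorized image block
        m = int(measurement_rate * n)               # low-dimensional measurement
        self.measure = nn.Linear(n, m, bias=False)  # learned measurement matrix
        self.reconstruct = nn.Sequential(           # stand-in for ReconNet
            nn.Linear(m, n), nn.ReLU(),
            nn.Linear(n, n),
        )

    def forward(self, blocks):                      # blocks: (batch, n)
        y = self.measure(blocks)                    # adaptive measurements
        return self.reconstruct(y)

# Joint training: gradients flow into both the measurement and reconstruction layers.
model = AdaptiveMeasurementNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
blocks = torch.rand(64, 33 * 33)                    # dummy image blocks in [0, 1]
for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(blocks), blocks)
    loss.backward()
    optimizer.step()
```

Because the measurement layer receives gradients from the reconstruction loss, it adapts to the training data, which is the contrast the abstract draws with a fixed random Gaussian matrix.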
Necessary and sufficient conditions of solution uniqueness in 1-norm minimization
This paper shows that the solutions to various convex $\ell_1$ minimization
problems are \emph{unique} if and only if a common set of conditions is
satisfied. This result applies broadly to the basis pursuit model, basis
pursuit denoising model, Lasso model, as well as other models that either
minimize $f(Ax-b)$ or impose the constraint $f(Ax-b) \le \sigma$, where $f$
is a strictly convex function. For these models, this paper proves that,
given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$,
$x^*$ is the unique solution if and only if $A_I$ has full column rank and there
exists a vector $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for $i \notin I$. This
condition is previously known to be sufficient for the basis pursuit model to
have a unique solution supported on $I$. Indeed, it is also necessary, and
applies to a variety of other models. The paper also discusses ways to
recognize unique solutions and verify the uniqueness conditions numerically.
Comment: 6 pages; revised version; submitted
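One way to verify the condition numerically, sketched below with NumPy, is to test the full-column-rank requirement and then try one specific candidate certificate, the least-norm y solving A_I^T y = s; if that y satisfies |a_i^T y| < 1 off the support, the condition holds, although failure of this particular y alone is not conclusive. The matrix A and candidate solution x_star in the example are assumed toy data, not from the paper.

```python
# Check the uniqueness condition for a candidate solution x_star with sensing
# matrix A: A_I has full column rank, and there is a y with A_I^T y = s and
# |a_i^T y| < 1 for i not in I.  Only the least-norm y is tested here.
import numpy as np

def check_uniqueness_certificate(A, x_star, tol=1e-10):
    I = np.flatnonzero(np.abs(x_star) > tol)        # support of the solution
    s = np.sign(x_star[I])                          # signs on the support
    A_I = A[:, I]
    if np.linalg.matrix_rank(A_I) < len(I):         # A_I must have full column rank
        return False
    y, *_ = np.linalg.lstsq(A_I.T, s, rcond=None)   # least-norm y solving A_I^T y = s
    if not np.allclose(A_I.T @ y, s, atol=1e-8):
        return False
    off_support = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.all(np.abs(A[:, off_support].T @ y) < 1 - 1e-12))

# Toy example (assumed data): a 1-sparse vector and a random Gaussian A.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
x_star = np.zeros(30)
x_star[3] = 2.0
print(check_uniqueness_certificate(A, x_star))
```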
Minimizing Acquisition Maximizing Inference -- A demonstration on print error detection
Is it possible to detect a feature in an image without ever looking at it?
Images are known to have sparser representation in Wavelets and other similar
transforms. Compressed Sensing is a technique which proposes simultaneous
acquisition and compression of any signal by taking very few random linear
measurements (M). The quality of reconstruction relates directly to M, which
should be above a certain threshold for reliable recovery. Since the signal can
be reconstructed non-adaptively, to a faithful extent, from these measurements
using purely analytical methods like Basis Pursuit, Matching Pursuit, iterative
thresholding, etc., we can be assured that these compressed samples contain
enough information about any relevant macro-level feature contained in the
(image) signal. Thus if we choose to deliberately acquire an even lower number
of measurements - in order to thwart the possibility of a comprehensible
reconstruction, but high enough to infer whether a relevant feature exists in
an image - we can achieve accurate image classification while preserving its
privacy. Through the print error detection problem, it is demonstrated that
such a novel system can be implemented in practice.
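A minimal sketch of the inference-without-reconstruction idea: classify directly from a small number M of random linear measurements, never forming the image. The dataset (scikit-learn's 8x8 digits), the value of M, and the logistic-regression classifier are illustrative assumptions, not the paper's print-error setup.

```python
# Sketch: classify from very few random linear measurements of an image,
# without ever reconstructing the image itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)               # 8x8 images, 64 pixels each
M = 10                                            # far fewer measurements than reconstruction needs
Phi = rng.standard_normal((64, M)) / np.sqrt(M)   # random measurement matrix
Z = X @ Phi                                       # classifier sees only the compressed measurements

Z_train, Z_test, y_train, y_test = train_test_split(Z, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Z_train, y_train)
print("accuracy from", M, "measurements:", clf.score(Z_test, y_test))
```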
Transferring Learning from External to Internal Weights in Echo-State Networks with Sparse Connectivity
Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this “transfer of learning” is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a “self-sensing” network state, and we compare and contrast this with compressed sensing.
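A sketch of the core algebraic step behind such a transfer, under common echo-state assumptions (tanh units, linear readout z = W_out x, output feedback through W_fb, no external input shown): since the feedback contributes W_fb W_out x to each unit's input, folding the rank-one term W_fb W_out into the recurrent matrix reproduces the trained dynamics without explicit feedback. This dense update ignores the sparse-connectivity constraint that the paper's methods are designed to respect, and all sizes and scalings below are illustrative.

```python
# Sketch: absorb learned output feedback of an echo-state network into its
# recurrent weights, so the trained dynamics run without explicit feedback.
# With feedback : x[k+1] = tanh(W x[k] + W_fb z[k]),  z[k] = W_out x[k]
# After transfer: x[k+1] = tanh((W + W_fb @ W_out) x[k])
import numpy as np

rng = np.random.default_rng(0)
N, n_out = 200, 1

# Sparse random recurrent matrix, scaled to a target spectral radius.
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_fb = rng.standard_normal((N, n_out))
W_out = 0.01 * rng.standard_normal((n_out, N))   # stands in for trained readout weights

W_transfer = W + W_fb @ W_out                    # fold feedback into the recurrence

# The two formulations produce identical trajectories from the same initial state.
x_fb = x_tr = rng.standard_normal(N)
for _ in range(50):
    x_fb = np.tanh(W @ x_fb + W_fb @ (W_out @ x_fb))
    x_tr = np.tanh(W_transfer @ x_tr)
print("trajectories match:", np.allclose(x_fb, x_tr))
```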